General Theory of Inferential Models II: Marginal Inference
Authors
Abstract
This paper is a continuation of the authors' theoretical investigation of inferential models (IMs); see Martin, Hwang and Liu (2010). The fundamental idea is that prior-free, posterior-probability-like inference with desirable long-run frequency properties can be achieved through a system based on predicting unobserved auxiliary variables. In Part I, an intermediate conditioning step was proposed to reduce the dimension of the auxiliary variable to be predicted, making the construction of efficient IMs more manageable. Here we consider the problem of inference in the presence of nuisance parameters, and we show that such problems admit a further auxiliary variable reduction via marginalization. Unlike classical procedures that use optimization or integration, the proposed framework eliminates nuisance parameters via a set union operation. Sufficient conditions are given for when this marginalization operation can be performed without loss of information, and in such cases we prove that an appropriately constructed IM is calibrated, in a frequentist sense, for marginal inference. In problems where these sufficient conditions are not met, we propose a marginalization technique based on parameter expansion that leads to conservative marginal inference. The marginal IM approach is illustrated on a number of examples, including Stein's problem and the Behrens-Fisher problem.

1. Introduction.

In statistical inference problems, it is often the case that only some components (or, more generally, some lower-dimensional functions) of the parameter vector θ are of interest. Linear regression, with θ = (β, σ²), is one such example, where primary interest is in the vector β of regression coefficients. Semiparametric problems (Bickel et al., 1998), such as the Cox proportional hazards model, form another important class of examples. More formally, suppose θ can be decomposed as θ = (ψ, ξ), where ψ is the parameter of interest and ξ is the nuisance parameter.
The goal is to make inference on ψ in the presence of the unknown nuisance parameter ξ. In these nuisance parameter problems, a modification of the classical likelihood framework is called for. Frequentists often opt for profile likelihood.
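As a toy illustration of the profiling idea just mentioned, the following sketch (not from the paper; the normal model and all variable names are illustrative assumptions) computes the profile log-likelihood for the mean μ of a normal sample, treating the variance σ² as the nuisance parameter. For fixed μ, the maximizer over σ² is available in closed form, so profiling reduces to plugging it in:

```python
import numpy as np

# Model: X_i ~ N(mu, sigma^2), i = 1, ..., n.
# Interest parameter psi = mu; nuisance parameter xi = sigma^2.
# Profile log-likelihood: maximize the log-likelihood over sigma^2
# for each fixed mu. Here the maximizer is sigma^2_hat(mu) =
# mean((x - mu)^2), giving a closed-form profile.

def profile_loglik(mu, x):
    n = len(x)
    s2_hat = np.mean((x - mu) ** 2)  # argmax over sigma^2 for fixed mu
    return -0.5 * n * (np.log(2 * np.pi * s2_hat) + 1)

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=50)

# Maximize the profile over a grid of candidate mu values; the profile
# MLE of mu coincides with the sample mean in this model.
grid = np.linspace(0.0, 4.0, 401)
pl = np.array([profile_loglik(m, x) for m in grid])
mu_hat = grid[np.argmax(pl)]
print(mu_hat, np.mean(x))
```

The point of the sketch is the contrast drawn in the abstract: profiling eliminates ξ by optimization, whereas the marginal IM framework developed in this paper eliminates it through a set union operation over the nuisance parameter.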